
Progress with Convolutional neural networks part 1 (Machine Learning 2023)

#artificialintelligence

Abstract: Convolutional neural networks (CNNs) are a representative class of deep learning algorithms that use convolutional computation to perform translation-invariant classification of input data through their hierarchical architecture. However, classical CNN training methods use the steepest-descent algorithm, and learning performance is strongly influenced by the initial weights of the convolutional and fully connected layers, requiring re-tuning to achieve good performance across different model structures and datasets. Drawing on the strengths of the simulated annealing algorithm in global search, we propose applying it to the hyperparameter search process in order to increase the effectiveness of CNNs. In this paper, we introduce SA-CNN neural networks for text classification tasks, based on Text-CNN neural networks, and implement the simulated annealing algorithm for hyperparameter search. Experiments demonstrate that we achieve greater classification accuracy than earlier manually tuned models, with substantial savings in search time and space relative to human tuning.
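The hyperparameter search the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's SA-CNN implementation: the search space, the cooling schedule, and the synthetic `evaluate` function (which stands in for training a Text-CNN and returning validation accuracy) are all assumptions made for the example.

```python
import math
import random

random.seed(0)

# Hypothetical grid of Text-CNN hyperparameters to search over.
SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3, 1e-2],
    "filter_count": [32, 64, 128, 256],
    "kernel_size": [3, 4, 5],
}

def evaluate(cfg):
    # Stand-in objective: in practice this would train the CNN and return
    # validation accuracy. Here, a synthetic score peaking at
    # lr=1e-3, 128 filters, kernel size 4.
    return -(
        (math.log10(cfg["learning_rate"]) + 3) ** 2
        + (math.log2(cfg["filter_count"]) - 7) ** 2
        + (cfg["kernel_size"] - 4) ** 2
    )

def neighbor(cfg):
    # Perturb one hyperparameter to an adjacent value in its grid.
    key = random.choice(list(SPACE))
    values = SPACE[key]
    i = values.index(cfg[key])
    j = min(max(i + random.choice([-1, 1]), 0), len(values) - 1)
    return {**cfg, key: values[j]}

def anneal(steps=500, t0=1.0, cooling=0.99):
    cfg = {k: random.choice(v) for k, v in SPACE.items()}
    score, t = evaluate(cfg), t0
    best_cfg, best_score = cfg, score
    for _ in range(steps):
        cand = neighbor(cfg)
        cand_score = evaluate(cand)
        # Accept better moves always; worse moves with Boltzmann probability.
        if cand_score > score or random.random() < math.exp((cand_score - score) / t):
            cfg, score = cand, cand_score
            if score > best_score:
                best_cfg, best_score = cfg, score
        t *= cooling  # geometric cooling schedule
    return best_cfg, best_score

best, best_score = anneal()
print(best)
```

Because worse moves are occasionally accepted at high temperature, the search can escape local optima that a pure hill climber would get stuck in, which is the global-search property the abstract appeals to.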


New applications of Explainable artificial intelligence part 4 (Machine Learning 2023)

#artificialintelligence

Abstract: Explainable AI (XAI) systems are sociotechnical in nature; thus, they are subject to the sociotechnical gap -- the divide between technical affordances and social needs. However, charting this gap is challenging. In the context of XAI, we argue that charting the gap improves our understanding of the problem, which can reflexively provide actionable insights to improve explainability. Using two case studies in distinct domains, we empirically derive a framework that facilitates systematic charting of the sociotechnical gap by connecting AI guidelines to the context of XAI and elucidating how to use them to address the gap. We apply the framework to a third case in a new domain, showcasing its affordances. Finally, we discuss the conceptual implications of the framework, share practical considerations for operationalizing it, and offer guidance on transferring it to new contexts.


New Developments in Deep Learning part 1 (Machine Learning 2023)

#artificialintelligence

Abstract: Neural networks drive the success of natural language processing. A fundamental property of natural languages is their compositional structure, allowing us to describe new meanings systematically. However, neural networks notoriously struggle with systematic generalization and do not necessarily benefit from a compositional structure in emergent communication simulations. Here, we test how neural networks compare to humans in learning and generalizing a new language. We do this by closely replicating an artificial language learning study (conducted originally with human participants) and evaluating the memorization and generalization capabilities of deep neural networks with respect to the degree of structure in the input language.


New Developments in Deep Learning part 2 (Machine Learning 2023)

#artificialintelligence

Abstract: Large amounts of feedback have been collected over the years, and many feedback analysis models have been developed, focusing mainly on the English language. Recognizing feedback is challenging yet crucial in languages that lack the corpora and tools employed in natural language processing (e.g., vocabulary corpora and sentence-structure rules). In this paper, we study feedback classification in the Mongolian language using deep learning with two different word embeddings, and we compare the results of the proposed approaches.


New Developments in Deep Learning part 3 (Machine Learning 2023)

#artificialintelligence

Abstract: In forensic studies of painting masterpieces, analysis of the support is of major importance. For plain-weave fabrics, the densities of vertical and horizontal threads serve as the main features, while angle deviations from the vertical and horizontal axes are also helpful. These features can be studied locally across the canvas. In this work, deep learning is proposed as a tool to perform these local density and angle studies. We trained the model with samples from 36 paintings by Velázquez, Rubens, and Ribera, among others.


New Research on Generative adversarial networks part 5 (Machine Learning 2023)

#artificialintelligence

Abstract: We present LM-GAN, an HDR sky model that generates photorealistic environment maps with weathered skies. Our sky model retains the flexibility of traditional parametric models and enables the reproduction of photorealistic all-weather skies with visual diversity in cloud formations. This is achieved through flexible and intuitive user controls for parameters including sun position, sky color, and atmospheric turbidity. Our method is trained directly on inputs fitted to real HDR skies, learning end-to-end both to preserve the input's illumination and to correlate it with the real reference's atmospheric components. Our main contributions are a generative model trained with both sky-appearance and scene-rendering losses, and a novel sky-parameter fitting algorithm.


New Research on Generative adversarial networks part 4 (Machine Learning 2023)

#artificialintelligence

Abstract: The conventional understanding of adversarial training in generative adversarial networks (GANs) is that the discriminator is trained to estimate a divergence and the generator learns to minimize this divergence. We argue that, although many GAN variants were developed following this paradigm, the current theoretical understanding of GANs and their practical algorithms are inconsistent. In this paper, we leverage Wasserstein gradient flows, which characterize the evolution of particles in the sample space, to gain theoretical insights into and algorithmic inspiration for GANs. We introduce a unified generative modeling framework, MonoFlow, in which the particle evolution is rescaled via a monotonically increasing mapping of the log density ratio. Under our framework, adversarial training can be viewed as a procedure that first obtains MonoFlow's vector field by training the discriminator, after which the generator learns to draw the particle flow defined by the corresponding vector field.
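The rescaled particle flow can be illustrated with a toy 1-D sketch, under our reading of the abstract: samples move along the gradient of a log density ratio, with each step rescaled by a monotonically increasing mapping `h`. The Gaussian densities (closed-form stand-ins for a trained discriminator), the choice of `h`, and the step size are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: move model samples toward a target distribution N(2, 1).
TARGET_MEAN = 2.0

def h(r):
    """Any monotonically increasing mapping of the log ratio; sigmoid is one choice."""
    return 1.0 / (1.0 + np.exp(-r))

particles = rng.normal(0.0, 1.0, 1000)  # samples from the current model
for _ in range(300):
    m = particles.mean()  # stand-in for the current model density N(m, 1)
    # log q(x) - log p(x) for unit-variance Gaussians N(2, 1) vs N(m, 1):
    log_ratio = (TARGET_MEAN - m) * particles + (m**2 - TARGET_MEAN**2) / 2.0
    grad = TARGET_MEAN - m  # d/dx of the log ratio (constant in x here)
    # Particle update rescaled by a monotone function of the log density ratio.
    particles += 0.1 * h(log_ratio) * grad

print(round(float(particles.mean()), 1))
```

As the particle distribution approaches the target, the log ratio and its gradient shrink and the flow slows down; in the paper this ratio would come from the discriminator rather than a closed form.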


Working with Long short-term memory models part 1 (Machine Learning 2023)

#artificialintelligence

Abstract: The release of toxic gases by industries, vehicle emissions, and increasing concentrations of harmful gases and particulate matter in the atmosphere all contribute to the deterioration of air quality. Factors such as industry, urbanization, population growth, and increased vehicle use drive the rapid rise in pollution levels, which can adversely impact human health. This paper presents a model for forecasting the air quality index in Nigeria using a bi-directional LSTM model. The air pollution data was downloaded from an online database (UCL). The dataset was pre-processed using pandas tools in Python.
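For context, a standard preprocessing step for an LSTM forecaster like the one described is to slice the time series into fixed-length windows paired with next-step targets. The sketch below uses a synthetic series; the window length and data are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def make_windows(series, window=24):
    """Slide a fixed-length window over the series; the value immediately
    after each window becomes its forecasting target."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y  # LSTMs expect (samples, timesteps, features)

# Synthetic AQI-like hourly series standing in for the real dataset.
aqi = np.sin(np.linspace(0, 20, 500)) * 50 + 100
X, y = make_windows(aqi, window=24)
print(X.shape, y.shape)  # (476, 24, 1) (476,)
```

Each row of `X` is one day of hourly history and the corresponding entry of `y` is the value to predict; a bi-directional LSTM would consume `X` in this `(samples, timesteps, features)` layout.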


Working with Long short-term memory models part 2 (Machine Learning 2023)

#artificialintelligence

Abstract: Core-collapse supernovae (CCSNe) are expected to emit gravitational wave signals that could be detected by current and future generation interferometers within the Milky Way and nearby galaxies. The stochastic nature of the signal arising from CCSNe requires alternative detection methods to matched filtering.


New Methods in Image Classification part 4 (Machine Learning 2023)

#artificialintelligence

Abstract: A good feature representation is the key to image classification. In practice, image classifiers may be applied in scenarios different from what they have been trained on. This so-called domain shift leads to a significant performance drop in image classification. Unsupervised domain adaptation (UDA) reduces the domain shift by transferring the knowledge learned from a labeled source domain to an unlabeled target domain. We perform feature disentanglement for UDA by distilling category-relevant features and excluding category-irrelevant features from the global feature maps.